Removed support for non-per-tensor quantized relu #14788
base: main
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/14788
Note: Links to docs will display an error until the docs builds have been completed.
❌ 2 New Failures as of commit b59a820 with merge base a39866c.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Summary: Not supporting the quantized relu default variant, so removing it from ref_implementations.
Differential Revision: D83874866
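For context, a per-tensor quantized relu reference implementation generally clamps at the input zero point and then requantizes with a single multiplier and shift. The sketch below is a minimal illustration only; the function name, signature, and rounding behavior are assumptions, not the actual ref_implementations API.

```python
import torch


def quantized_relu_per_tensor(
    x: torch.Tensor,      # quantized input values (e.g. int8/uint8 storage)
    in_zero_point: int,   # single zero point for the whole input tensor
    out_zero_point: int,
    out_multiplier: int,
    out_shift: int,       # per the fix later in this stack, kept as int32
) -> torch.Tensor:
    # ReLU in the quantized domain: values at or below the input zero point
    # map to the output zero point.
    acc = (x.to(torch.int32) - in_zero_point).clamp_min(0)
    # Fixed-point requantization: multiply, shift, re-add the output zero
    # point. Rounding behavior differs between backends; this is the
    # simplest form.
    acc = (acc * out_multiplier) >> out_shift
    info = torch.iinfo(x.dtype)
    return (acc + out_zero_point).clamp(info.min, info.max).to(x.dtype)
```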
Summary: The out shift should be int32.
Reviewed By: hsharma35
Differential Revision: D83875670
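A hedged illustration of the dtype issue this commit describes: when the out shift is materialized as a tensor from a Python int, torch infers int64, so int32 has to be requested explicitly. The variable names below are assumptions, not the actual code.

```python
import torch

out_shift_value = 7  # example shift amount

# torch infers int64 for Python ints, which is not what the kernel expects.
wrong = torch.tensor(out_shift_value)
assert wrong.dtype == torch.int64

# The fix: materialize the out shift explicitly as int32.
out_shift = torch.tensor(out_shift_value, dtype=torch.int32)
assert out_shift.dtype == torch.int32
```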
Summary: Fix to just call the per-tensor variants for quantized conv and quantized relu, since those are the only ones we are supporting.
Differential Revision: D83873738
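A minimal torch.fx sketch of the idea behind this commit: a pass that rewrites calls to a "default" quantized op so they target the per-tensor variant instead. The placeholder ops below stand in for the real quantized conv/relu operators; they are not the actual executorch/cadence targets.

```python
import torch
import torch.fx as fx


# Hypothetical ops standing in for the real quantized operators.
def quantized_relu_default(x, zero_point):
    return torch.clamp_min(x, zero_point)


def quantized_relu_per_tensor(x, zero_point: int):
    return torch.clamp_min(x, zero_point)


def use_per_tensor_variants(gm: fx.GraphModule) -> fx.GraphModule:
    """Rewrite default-variant calls to their per-tensor counterparts."""
    for node in gm.graph.nodes:
        if node.op == "call_function" and node.target is quantized_relu_default:
            # A real pass would also extract the scalar quant params from
            # the tensor arguments here; this sketch only swaps the target.
            node.target = quantized_relu_per_tensor
    gm.graph.lint()
    gm.recompile()
    return gm
```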
Summary: The original pass didn't fetch the user-provided zero point if one existed; it just assumed a hard-coded zero point. Fixed now.
Reviewed By: ethansfng
Differential Revision: D83873937
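A hedged sketch of the zero-point fix described above: read the zero point the caller actually provided and only fall back to a default when none is present. The argument position used here (zero point as args[1]) is an assumption for illustration, not the real operator schema.

```python
import torch.fx as fx

DEFAULT_ZERO_POINT = 0  # the previously hard-coded assumption


def get_zero_point(node: fx.Node) -> int:
    # Prefer the zero point the user actually passed; only fall back to the
    # default when no zero-point argument is present.
    if len(node.args) > 1 and isinstance(node.args[1], int):
        return node.args[1]
    return DEFAULT_ZERO_POINT
```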
Summary: Not supporting the quantized relu default variant, so removing it from ref_implementations.
Differential Revision: D83874866
Force-pushed from 4927981 to b59a820.